Pierrotlc's group workspace
Base - 32x32
Run
absurd-energy-8
State
Finished
Start time
January 17th, 2022 1:58:12 PM
Runtime
35m 18s
Tracked hours
35m 9s
Run path
pierrotlc/AnimeStyleGAN/y0kscy1j
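The run path above is all the public API needs to pull this run programmatically. A minimal sketch, assuming a logged-in `wandb` installation with access to the project (`_json_dict` mirrors the dict-export pattern from the W&B docs):

```python
import wandb

api = wandb.Api()
run = api.run("pierrotlc/AnimeStyleGAN/y0kscy1j")  # entity/project/run_id, i.e. the run path above

print(run.name, run.state)     # "absurd-energy-8", "finished"
print(dict(run.config))        # the 18 logged config parameters, with their key names
print(run.summary._json_dict)  # the summary metrics recorded at the end of the run
```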
OS
Linux-5.15.11-76051511-generic-x86_64-with-glibc2.10
Python version
3.8.5
Git repository
git clone git@github.com:Futurne/AnimeStyleGAN.git
Git state
git checkout -b "absurd-energy-8" 19b947ae7b438e150bc3b904a4fb91c5f1ad4b55
Command
launch_training.py
System Hardware
| CPU count | 16 |
| GPU count | 1 |
| GPU type | NVIDIA GeForce RTX 3080 Laptop GPU |
W&B CLI Version
0.12.9
Group
Base - 32x32
Config
Config parameters are your model's inputs. A hedged reconstruction sketch of the discriminator they describe follows the list below.
18 keys were logged; the key names did not survive this export, so only the values are listed:
- 128
- "cuda"
- 32
- 32
- 15
- 0.0001
- 0.00005
- 128
- 8
- 3
- "Discriminator( (first_conv): Conv2d(3, 8, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (blocks): ModuleList( (0): DiscriminatorBlock( (convs): Sequential( (0): Conv2d(8, 8, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (1): BatchNorm2d(8, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): LeakyReLU(negative_slope=0.01) (3): Conv2d(8, 8, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (4): BatchNorm2d(8, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): LeakyReLU(negative_slope=0.01) ) (downsample): Conv2d(8, 16, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)) ) (1): DiscriminatorBlock( (convs): Sequential( (0): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): LeakyReLU(negative_slope=0.01) (3): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (4): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): LeakyReLU(negative_slope=0.01) ) (downsample): Conv2d(16, 32, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)) ) (2): DiscriminatorBlock( (convs): Sequential( (0): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): LeakyReLU(negative_slope=0.01) (3): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): LeakyReLU(negative_slope=0.01) ) (downsample): Conv2d(32, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)) ) (3): DiscriminatorBlock( (convs): Sequential( (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): LeakyReLU(negative_slope=0.01) (3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): LeakyReLU(negative_slope=0.01) ) (downsample): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)) ) (4): DiscriminatorBlock( (convs): Sequential( (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): LeakyReLU(negative_slope=0.01) (3): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (4): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): LeakyReLU(negative_slope=0.01) ) (downsample): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)) ) ) (classify): Sequential( (0): Conv2d(256, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (1): Flatten(start_dim=1, end_dim=-1) ) )"
- "StyleGAN( (mapping): MappingNetwork( (norm): LayerNorm((32,), eps=1e-05, elementwise_affine=True) (fully_connected): ModuleList( (0): Sequential( (0): Linear(in_features=32, out_features=32, bias=True) (1): LayerNorm((32,), eps=1e-05, elementwise_affine=True) (2): LeakyReLU(negative_slope=0.01) ) (1): Sequential( (0): Linear(in_features=32, out_features=32, bias=True) (1): LayerNorm((32,), eps=1e-05, elementwise_affine=True) (2): LeakyReLU(negative_slope=0.01) ) (2): Sequential( (0): Linear(in_features=32, out_features=32, bias=True) (1): LayerNorm((32,), eps=1e-05, elementwise_affine=True) (2): LeakyReLU(negative_slope=0.01) ) ) (out_layer): Linear(in_features=32, out_features=32, bias=True) ) (synthesis): SynthesisNetwork( (blocks): ModuleList( (0): SynthesisBlock( (upsample): Upsample(scale_factor=2.0, mode=nearest) (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (ada_in): AdaIN() ) (1): SynthesisBlock( (upsample): Upsample(scale_factor=2.0, mode=nearest) (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (ada_in): AdaIN() ) (2): SynthesisBlock( (upsample): Upsample(scale_factor=2.0, mode=nearest) (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (ada_in): AdaIN() ) (3): SynthesisBlock( (upsample): Upsample(scale_factor=2.0, mode=nearest) (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (ada_in): AdaIN() ) ) (to_rgb): Conv2d(128, 3, kernel_size=(1, 1), stride=(1, 1)) ) )"
- "Adam ( Parameter Group 0 amsgrad: False betas: (0.9, 0.999) eps: 1e-08 lr: 0.0001 weight_decay: 0 )"
- "Adam ( Parameter Group 0 amsgrad: False betas: (0.9, 0.999) eps: 1e-08 lr: 5e-05 weight_decay: 0 )"
- 0
- "<torch.utils.data.dataloader.DataLoader object at 0x7feac370e6d0>"
- "<torch.utils.data.dataloader.DataLoader object at 0x7feac370e5e0>"
- 0.1
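The discriminator string above spells out the full layer structure, so the module can be rebuilt from the printed repr alone. The sketch below is a hedged reconstruction: the layer layout matches the repr, but the argument names, the forward wiring, and the pairing of the two Adam learning rates with generator versus discriminator are assumptions, not code taken from the AnimeStyleGAN repository.

```python
import torch
from torch import nn


class DiscriminatorBlock(nn.Module):
    """Two 3x3 convs with BatchNorm + LeakyReLU, then a strided 4x4 conv that
    halves the resolution and doubles the channel count (as in the repr above)."""

    def __init__(self, channels: int):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.LeakyReLU(),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.LeakyReLU(),
        )
        self.downsample = nn.Conv2d(channels, 2 * channels, 4, stride=2, padding=1)

    def forward(self, x):
        # Forward wiring is inferred from the module names; the original may
        # differ (e.g. a residual connection around `convs`).
        return self.downsample(self.convs(x))


class Discriminator(nn.Module):
    """3 -> 8 channels, five downsampling blocks (8 -> 256), then a 1-logit head."""

    def __init__(self, first_channels: int = 8, n_blocks: int = 5):
        super().__init__()
        self.first_conv = nn.Conv2d(3, first_channels, 3, padding=1)
        self.blocks = nn.ModuleList(
            DiscriminatorBlock(first_channels * 2 ** i) for i in range(n_blocks)
        )
        self.classify = nn.Sequential(
            nn.Conv2d(first_channels * 2 ** n_blocks, 1, 3, padding=1, bias=False),
            nn.Flatten(),
        )

    def forward(self, x):
        x = self.first_conv(x)
        for block in self.blocks:
            x = block(x)
        return self.classify(x)


if __name__ == "__main__":
    netD = Discriminator()
    # A 32x32 RGB batch shrinks to 1x1 after five stride-2 blocks, giving one logit per image.
    logits = netD(torch.randn(4, 3, 32, 32))
    print(logits.shape)  # torch.Size([4, 1])

    # The config logs two Adam optimizers with lr=1e-4 and lr=5e-5; which one drives
    # the generator and which the discriminator is an assumption here.
    optim_d = torch.optim.Adam(netD.parameters(), lr=5e-5)
```

The StyleGAN string can be read the same way: a LayerNorm-plus-Linear mapping network (32 → 32 over three hidden layers) feeding four upsampling synthesis blocks with AdaIN, ending in a 1×1 to-RGB convolution.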
Summary
Summary metrics are your model's outputs.
15 keys were logged (one of them a nested object with 7 keys); as with the config, only the values survived this export:
- 0.19997230435118957
- 1.6667143246706797
- 0.8505897101233987
- 0.1654195356018403
- 1.6832562685012815
- 0.8014907591483172
- 0.22499478652196772
- 0.19859696091397813
- 1.6734108305291129
- 0.8518471129630741
- 0.16384658550745562
- 1.6897954909425033
- 0.802024860523249
- 0.22426086517148897
Artifact Outputs
This run produced these artifacts as outputs. Total: 3.
| Type | Name | Consumer count |
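The artifact rows themselves did not load in this export, but the three outputs can be listed through the public API; `logged_artifacts()` is the public-API call for a run's output artifacts (a sketch, under the same access assumptions as above).

```python
import wandb

api = wandb.Api()
run = api.run("pierrotlc/AnimeStyleGAN/y0kscy1j")

# Enumerate the run's output artifacts (Total: 3 per the overview above).
for artifact in run.logged_artifacts():
    print(artifact.type, artifact.name)
```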